As demonstrated by GPT-3 and T5, transformers grow in capability as their parameter spaces become larger and larger. However, for tasks that require a large amount of knowledge, non-parametric memory allows models to grow dramatically with only a sub-linear increase in computational cost and GPU memory requirements. Recent models such as RAG and REALM have introduced retrieval into conditional generation, incorporating neural initial retrieval from a corpus of passages. Building on this line of research, we propose Re2G, which combines both neural initial retrieval and reranking into a BART-based sequence-to-sequence generation model. Our reranking approach also permits merging retrieval results from sources with incomparable scores, enabling an ensemble of BM25 and neural initial retrieval. To train our system end-to-end, we introduce a novel variation of knowledge distillation that trains the initial retrieval, reranker, and generation using only ground truth on the target sequence output. We find large gains in four diverse tasks: zero-shot slot filling, question answering, fact checking, and dialog, with relative gains of 9% to 34% over the previous state of the art on the KILT leaderboard. We make our code available as open source at https://github.com/ibm/kgi-slot-filling/tree/re2g.
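The Re2G abstract notes that reranking lets results from sources with incomparable scores (BM25 vs. a neural retriever) be merged. A minimal sketch of that idea follows; the toy term-overlap scorer stands in for Re2G's actual trained reranker, and all names here are illustrative:

```python
def merge_and_rerank(bm25_hits, dense_hits, rerank_score, k=3):
    """Union candidates from two retrievers whose native scores are
    incomparable, then order them all by a single reranker score."""
    candidates = {}
    for doc, _native_score in bm25_hits + dense_hits:  # native scores discarded
        candidates[doc] = rerank_score(doc)
    return sorted(candidates, key=candidates.get, reverse=True)[:k]

# Toy stand-in scorer: overlap with the query terms.
query = {"capital", "france"}
score = lambda doc: len(query & set(doc.lower().split()))

bm25 = [("Paris is the capital of France", 12.3), ("France borders Spain", 9.1)]
dense = [("The capital of France is Paris", 0.87), ("Berlin is in Germany", 0.52)]
top = merge_and_rerank(bm25, dense, score, k=2)
```

Because the reranker assigns a fresh score to every candidate, the BM25 scores (unbounded) and dense similarities (roughly 0-1) never need to be calibrated against each other.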
Research plays an important role in driving innovation within an organization. As the speed and volume of information grow, it is increasingly challenging to draw insights, follow trends, keep up with new research, and formulate strategies. In this paper, we present a use case of how a corporate research community leverages Semantic Web technologies to induce a unified knowledge graph from structured and textual data, by integrating data from the various applications used by the community related to research projects, academic papers, datasets, achievements, and recognition. To make the knowledge graph more accessible to application developers, we identified a set of common patterns for exploiting the induced knowledge and exposed them as APIs. These patterns were born from user research that identified the most valuable use cases and the user pain points to be alleviated. We outline two distinct scenarios: recommendation and analytics for business use. We discuss these scenarios in detail and provide an empirical evaluation for entity recommendation. The methodologies employed and the lessons learned from this work can be applied to other organizations facing similar challenges.
In this paper, we present a system that demonstrates the capabilities of the latest state-of-the-art retrieval-augmented generation models, trained on knowledge-intensive language tasks such as slot filling, open-domain question answering, dialogue, and fact checking. Moreover, given a user query, we show how the outputs of these different models can be combined to cross-examine one another. In particular, we show how the accuracy of dialogue can be improved using a question-answering model. We also release all the models used in the demo as a contribution of this paper. A short video demonstrating the system is available at https://ibm.box.com/v/emnlp2022-demo.
Fast and accurate detection of the disease can significantly reduce the strain on the healthcare facilities of any country and lower mortality during a pandemic. The aim of this work is to create a multimodal system, using a novel machine learning framework, that uses both chest X-ray (CXR) images and clinical data simultaneously to predict the severity of COVID-19 patients. In addition, the study presents a nomogram-based scoring technique for predicting the probability of death in high-risk patients. The study used 25 biomarkers and CXR images to predict risk in 930 COVID-19 patients admitted during the first wave of COVID-19 in Italy (March-June 2020). The proposed multimodal stacking technique produced an accuracy, sensitivity, and F1-score of 89.03%, 90.44%, and 89.03%, respectively, in identifying low-risk and high-risk patients. This multimodal approach improved accuracy by 6% compared to using CXR images or clinical data alone. Finally, a nomogram scoring system using multivariable logistic regression was used to stratify the death risk of the high-risk patients identified in the first stage. Lactate dehydrogenase (LDH), O2 percentage, white blood cell (WBC) count, age, and C-reactive protein (CRP) were identified as useful predictors using a random forest feature selection model. A nomogram score based on these five predictor parameters and the CXR images was developed to quantify the probability of death and divide patients into two risk groups: survived (<50%) and death (>=50%). The multimodal technique was able to predict the death probability of high-risk patients with an F1-score of 92.88%. The areas under the curve for the development and validation cohorts were 0.981 and 0.939, respectively.
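The nomogram in the abstract above encodes a multivariable logistic regression: each predictor contributes points proportional to its coefficient, and the total maps to a death probability through the logistic function. A minimal sketch follows; the coefficients, intercept, and patient values are made-up placeholders for illustration, not the fitted values from the paper:

```python
import math

def death_probability(features, coefs, intercept):
    """Logistic-regression risk score of the kind a nomogram encodes:
    sum each predictor's weighted contribution, then squash to (0, 1)."""
    z = intercept + sum(coefs[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Placeholder coefficients for illustration only; the real values come
# from the paper's multivariable logistic regression fit.
coefs = {"ldh": 0.004, "o2_pct": -0.08, "wbc": 0.05, "age": 0.03, "crp": 0.01}
patient = {"ldh": 450, "o2_pct": 88, "wbc": 11.2, "age": 71, "crp": 95}

p = death_probability(patient, coefs, intercept=-3.0)
risk_group = "death (>=50%)" if p >= 0.5 else "survived (<50%)"
```

The 50% threshold reproduces the two-group stratification described in the abstract (survived vs. death).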
Automatic medical image classification is a very important field where the use of AI has the potential to have a real social impact. However, many challenges still stand in the way of practically effective solutions. One of them is that most medical imaging datasets suffer from class imbalance, and existing AI techniques, particularly neural network-based deep-learning methodologies, often perform poorly in such scenarios. This makes the area an interesting and active research focus. In this study, we propose a novel loss function for training neural network models that mitigates this critical issue in this important field. Through rigorous experiments on three independently collected datasets from three different medical imaging domains, we empirically show that our proposed loss function consistently performs well, with an improvement of 2%-10% in macro F1 over the baseline models. We hope that our work will precipitate new research toward a more generalized approach to medical image classification.
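The abstract above does not reproduce the proposed loss function itself. A common baseline for imbalance-aware training, sketched here purely for illustration (this is not the paper's loss), is a cross-entropy weighted by inverse class frequency, so mistakes on rare classes cost more:

```python
import math

def class_weights(labels, num_classes):
    """Inverse-frequency weights: rarer classes get larger weights."""
    counts = [max(labels.count(c), 1) for c in range(num_classes)]
    total = sum(counts)
    return [total / (num_classes * n) for n in counts]

def weighted_cross_entropy(probs, label, weights):
    """Per-sample loss: -w_y * log p_y for the true class y."""
    return -weights[label] * math.log(probs[label])

labels = [0, 0, 0, 0, 0, 0, 0, 0, 1, 2]  # heavily imbalanced toy dataset
w = class_weights(labels, num_classes=3)

# The same confident mistake costs far more on the rare class 2
# than on the majority class 0.
loss_rare = weighted_cross_entropy([0.8, 0.1, 0.1], label=2, weights=w)
loss_common = weighted_cross_entropy([0.1, 0.1, 0.8], label=0, weights=w)
```

Macro F1, the metric reported above, averages F1 over classes with equal weight, which is why it rewards exactly this kind of rare-class improvement.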
We propose KnowGL, a tool that allows converting text into structured relational data represented as a set of ABox assertions compliant with the TBox of a given Knowledge Graph (KG), such as Wikidata. We address this problem as a sequence generation task by leveraging pre-trained sequence-to-sequence language models, e.g. BART. Given a sentence, we fine-tune such models to detect pairs of entity mentions and jointly generate a set of facts consisting of the full set of semantic annotations for a KG, such as entity labels, entity types, and their relationships. To showcase the capabilities of our tool, we build a web application consisting of a set of UI widgets that help users to navigate through the semantic data extracted from a given input text. We make the KnowGL model available at https://huggingface.co/ibm/knowgl-large.
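KnowGL serializes each extracted fact as text carrying the mention, label, and type of both entities plus the relation. The exact serialization is defined by the released model (check the model card at the URL above); assuming a shape like `(mention#label#type)|relation|(mention#label#type)`, a downstream parser might look like this, with the example fact invented for illustration:

```python
def parse_fact(fact: str):
    """Split one generated fact of the assumed form
    (mention#label#type)|relation|(mention#label#type)
    into ABox-style annotations."""
    subj, rel, obj = fact.split("|")

    def entity(part):
        mention, label, etype = part.strip("()").split("#")
        return {"mention": mention, "label": label, "type": etype}

    return {"subject": entity(subj), "relation": rel, "object": entity(obj)}

fact = "(Rome#Rome#city)|country|(Italy#Italy#country)"
parsed = parse_fact(fact)
```

Parsed facts of this form can then be matched against Wikidata identifiers to produce ABox assertions compliant with the KG's TBox.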
Compared to its 2D image counterpart, zero-shot learning (ZSL) on 3D point cloud data is a related but underexplored problem. 3D data poses new challenges for ZSL due to the unavailability of robust pre-trained feature extraction models. To address this issue, we propose a prompt-guided 3D scene generation and supervision method that augments 3D data so the network learns better, exploring the complex interplay of seen and unseen objects. First, we merge the point clouds of two 3D models in certain ways described by a prompt; the prompt acts as an annotation describing each 3D scene. We then perform contrastive learning to train our proposed architecture in an end-to-end manner. We argue that 3D scenes can relate objects more efficiently than single objects, because popular language models (like BERT) achieve high performance when objects appear in context. Our proposed prompt-guided scene generation method encapsulates data augmentation and prompt-based annotation/captioning to improve 3D ZSL performance. We achieve state-of-the-art ZSL and generalized ZSL performance on synthetic (ModelNet40, ModelNet10) and real-scanned (ScanObjectNN) 3D object datasets.
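The core augmentation step described above, merging two object point clouds into a scene annotated by a prompt, can be sketched minimally as follows. The composition rule (shift the second object and caption the pair) and all names here are illustrative simplifications, not the paper's actual procedure:

```python
import random

def make_scene(cloud_a, cloud_b, name_a, name_b, offset=(2.0, 0.0, 0.0)):
    """Compose a two-object 3D scene: shift the second cloud so the two
    objects sit side by side, and emit a prompt describing the layout."""
    shifted = [(x + offset[0], y + offset[1], z + offset[2])
               for x, y, z in cloud_b]
    prompt = f"a {name_a} next to a {name_b}"
    return cloud_a + shifted, prompt

# Toy unit-cube point clouds standing in for ModelNet-style objects.
chair = [(random.random(), random.random(), random.random()) for _ in range(64)]
table = [(random.random(), random.random(), random.random()) for _ in range(64)]
scene, prompt = make_scene(chair, table, "chair", "table")
```

Each (scene, prompt) pair then serves as a positive pair for the contrastive objective, pairing the scene's point features with the prompt's language-model embedding.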
Climate change has increased the intensity, frequency, and duration of extreme weather events and natural disasters across the world. While the increased data on natural disasters improves the scope of machine learning (ML) in this field, progress is relatively slow. One bottleneck is the lack of benchmark datasets that would allow ML researchers to quantify their progress against a standard metric. The objective of this short paper is to explore the state of benchmark datasets for ML tasks related to natural disasters, categorizing them according to the disaster management cycle. We compile a list of existing benchmark datasets introduced in the past five years. We propose a web platform - NADBenchmarks - where researchers can search for benchmark datasets for natural disasters, and we develop a preliminary version of such a platform using our compiled list. This paper is intended to aid researchers in finding benchmark datasets to train their ML models on, and provide general directions for topics where they can contribute new benchmark datasets.
Handwriting recognition has been a field of great interest in the Artificial Intelligence domain. Owing to its broad real-life use cases, it has been studied widely, with prominent work focusing mainly on Latin characters. However, the domain of Arabic handwritten character recognition is still relatively unexplored. The inherently cursive nature of Arabic characters and the variation in writing styles across individuals make the task even more challenging. We identify some probable reasons behind this and propose a lightweight Convolutional Neural Network-based architecture for recognizing Arabic characters and digits. The proposed pipeline consists of 18 layers in total: four layers each of convolution, pooling, batch normalization, and dropout, followed by a global average pooling layer and a dense layer. Furthermore, we thoroughly investigate the choices of hyperparameters such as the optimizer, kernel initializer, and activation function. Evaluated on the publicly available 'Arabic Handwritten Character Dataset (AHCD)' and 'Modified Arabic handwritten digits Database (MadBase)' datasets, the proposed model achieves accuracies of 96.93% and 99.35%, respectively, which are comparable to the state of the art and make it a suitable solution for real-life end-user applications.
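The 18-layer layout described above (four repetitions of convolution, pooling, batch normalization, and dropout, then global average pooling and a dense layer) can be sketched as follows. This is an illustrative reconstruction, not the authors' released code: the channel widths, 32x32 grayscale input, dropout rate, and 28-class output are assumptions:

```python
import torch
import torch.nn as nn

class ArabicCharCNN(nn.Module):
    """Hypothetical sketch of the 18-layer pipeline:
    4 x (conv -> pool -> batchnorm -> dropout) = 16 layers,
    plus global average pooling and a dense layer = 18."""

    def __init__(self, num_classes=28):
        super().__init__()
        layers, in_ch = [], 1
        for out_ch in (32, 64, 128, 256):  # assumed channel widths
            layers += [
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                nn.MaxPool2d(2),
                nn.BatchNorm2d(out_ch),
                nn.Dropout(0.2),
            ]
            in_ch = out_ch
        self.features = nn.Sequential(*layers)
        self.gap = nn.AdaptiveAvgPool2d(1)      # global average pooling
        self.fc = nn.Linear(256, num_classes)   # dense output layer

    def forward(self, x):
        x = self.features(x)
        x = self.gap(x).flatten(1)
        return self.fc(x)
```

With 32x32 inputs, the four pooling stages reduce the spatial size to 2x2 before global average pooling collapses it to a 256-dimensional vector.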
Objective: To explore the ability of deep learning algorithms to further streamline and optimize urethral plate (UP) quality assessment using the Plate Objective Scoring Tool (POST), aiming to increase the objectivity and reproducibility of assessment in hypospadias repair. Methods: The five key POST landmarks were labeled by specialists in a dataset of 691 images of prepubertal boys undergoing primary hypospadias repair. This dataset was then used to develop and validate a deep-learning-based landmark detection model. The proposed framework begins with glans localization and detection, in which the input image is cropped using the predicted bounding box. Next, a deep convolutional neural network (CNN) architecture is used to predict the coordinates of the five POST landmarks. These predicted landmarks are then used to assess the quality of the urethral plate in distal hypospadias. Results: The proposed model accurately localized the glans area, with a mean average precision (mAP) of 99.5% and an overall sensitivity of 99.1%. In predicting landmark coordinates, a normalized mean error (NME) of 0.07152 was achieved, with a mean squared error (MSE) of 0.001 and a failure rate of 20.2% at a 0.1 NME threshold. Conclusions: This deep learning application demonstrates robustness and high precision in assessing UP quality using POST. Further assessment using an international, multi-center, image-based database is ongoing. External validation could benefit the deep learning algorithms and lead to better assessment, decision-making, and prediction of surgical outcomes.
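The NME and failure-rate figures reported above follow the standard landmark-detection definitions: mean Euclidean distance between predicted and ground-truth landmarks divided by a normalization length, with a prediction counted as a failure when its NME exceeds 0.1. A minimal sketch, with the five landmark coordinates and the normalization length made up for illustration:

```python
import math

def normalized_mean_error(pred, truth, norm):
    """Mean Euclidean distance between predicted and ground-truth
    landmarks, divided by a normalization length (e.g. the size of
    the detected bounding box)."""
    dists = [math.dist(p, t) for p, t in zip(pred, truth)]
    return sum(dists) / (len(dists) * norm)

truth = [(10, 10), (20, 10), (15, 20), (12, 30), (18, 30)]  # five landmarks
pred = [(11, 10), (20, 12), (15, 19), (13, 30), (18, 31)]

nme = normalized_mean_error(pred, truth, norm=100.0)
failed = nme > 0.1  # failure at the paper's 0.1 NME threshold
```

Averaging this per-image NME over a test set gives the aggregate figure reported in the results, and the fraction of images with `failed == True` gives the failure rate.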